Computer vision - Lab 5¶

Martyna Stasiak id.156071

Agenda¶

  • frequency domain image,
  • image analysis in the frequency domain,
  • high-pass and low-pass filtering,
  • optimizing convolution-based filtering using the frequency domain

Helpers¶

To perform the tasks, it is necessary to import the libraries used in the script and download the data on which we will be working.

In this script we will be using:

  • Image Lenna (available at the link) - one of the most popular images historically used for testing image processing and compression,
In [298]:
# import the libraries used in this script
import cv2
import matplotlib.pyplot as plt
import numpy as np
import PIL
%matplotlib inline
import pandas as pd
from pandas import DataFrame
from IPython.display import display, HTML
from skimage.exposure import rescale_intensity
import plotly.graph_objects as go
import json
import os

pd.options.display.html.border = 0
pd.options.display.float_format = '{:,.2f}'.format
In [299]:
# download Lenna image
!wget -O lena_std.tif http://www.lenna.org/lena_std.tif
--2024-11-13 21:04:34--  http://www.lenna.org/lena_std.tif
Resolving www.lenna.org (www.lenna.org)... 107.180.37.106
Connecting to www.lenna.org (www.lenna.org)|107.180.37.106|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 786572 (768K) [image/tiff]
Saving to: ‘lena_std.tif’

lena_std.tif        100%[===================>] 768.14K  2.02MB/s    in 0.4s    

2024-11-13 21:04:35 (2.02 MB/s) - ‘lena_std.tif’ saved [786572/786572]

The Colab platform requires a special way to display images with OpenCV. If the notebook is run in Colab, execute the following code:

In [300]:
if "google.colab" in str(get_ipython()):
    from google.colab.patches import cv2_imshow

    imshow = cv2_imshow
else:

    def imshow(img):
        img = img.clip(0, 255).astype("uint8")
        if img.ndim == 3:
            if img.shape[2] == 4:
                img = cv2.cvtColor(img, cv2.COLOR_BGRA2RGBA)
            else:
                img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        display(PIL.Image.fromarray(img))
In [301]:
def h_color(a, interpolation=None, size=None, fy=1.5, fx=1.5, cmap="gray"):
    s = [int(a.shape[0] * fy), int(a.shape[1] * fx)] if size is None else size
    plt.figure(figsize=s)
    plt.tick_params(
        axis="both",
        which="both",
        bottom=False,
        top=False,
        labelbottom=False,
        labelleft=False,
        left=False,
        right=False,
    )
    plt.imshow(a, cmap=cmap, interpolation=interpolation)
In [302]:
css = """
<style type="text/css">
  table, td, table.dataframe, table.dataframe td {
    border: 1px solid black;    /* border: double; */
    border-collapse: collapse;
    border-style: solid;
    border-spacing: 0px;
    background-color: rgb(250,250,250);
    width: 24px;
    height: 24px;
    text-align: center;
    transform: scale(1.0);
    margin: 5px;
    }
</style>
"""


def h(s):
    return display(HTML(css + DataFrame(s).to_html(header=False, index=False)))
In [303]:
def h_color_3d(z):
    fig = go.Figure(data=[go.Surface(z=z)])
    fig.update_layout(autosize=False, width=500, height=500)
    fig.show()

Image in the frequency domain¶

Previous examples of image processing operated in the intensity domain. Another domain in which an image can be represented is frequency. The frequency-domain representation of an image tells us which repeating patterns (frequencies) are present in the image and how strongly each of them contributes.

For example, a very blurry image is dominated by slowly varying content, so most of its energy sits at low frequencies. A very sharp image, on the other hand, contains many high-frequency components, because neighboring pixels differ strongly from each other.

An example of the transition from the intensity domain to the frequency domain is shown below.

In [304]:
img = cv2.imread("./lena_std.tif", 1)
img_grayscale = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

imshow(img)
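
To make the blurry-vs-sharp intuition concrete, here is a small sketch (the blur kernel size and the 64×64 low-frequency window are arbitrary choices): after blurring, a larger fraction of the spectrum's energy concentrates near the center, i.e. in the low frequencies.

In [ ]:
# sketch: compare the low-frequency energy fraction of the sharp original
# and a Gaussian-blurred copy (kernel size and window are arbitrary choices)
blurred = cv2.GaussianBlur(img_grayscale, (15, 15), 5)
for im in (img_grayscale, blurred):
    mag = np.abs(np.fft.fftshift(np.fft.fft2(im)))
    cy, cx = mag.shape[0] // 2, mag.shape[1] // 2
    low = mag[cy - 32 : cy + 32, cx - 32 : cx + 32].sum()
    print(low / mag.sum())  # higher for the blurred image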

Fourier transform¶

The Fourier transform is given by the following formula:

$${\mathcal{F}}(\omega) = \int_{-\infty}^{\infty} f(x) e^{-2\pi i x \omega} dx = \int_{-\infty}^{\infty} f(x) (\cos(-2\pi x \omega) + i \sin(-2\pi x \omega)) dx $$

For a 2-D signal:

$${\mathcal{F}}(u,v) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y) e^{-2\pi i (ux+vy)} dx dy$$

where $u,v$ are frequencies, $f(x,y)$ is the image on an intensity scale, and $i$ is the imaginary unit ($i^2 = -1$). The above formula can be interpreted as follows:

  • ${\mathcal{F}}(u,v)$ is computed separately for each frequency pair $(u, v)$,
  • for each ${\mathcal{F}}(u,v)$, the input image is multiplied point-wise by the complex-exponential component from the formula (with the corresponding $u,v$) and the result is summed. It can therefore be treated as a weighted sum over all pixels, where the weights are given by that component (see the sketch right after this list).
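
The weighted-sum interpretation can be verified directly against NumPy's FFT on a small random array. A minimal sketch (the array size and the pair $(u, v) = (2, 3)$ are arbitrary choices):

In [ ]:
# sketch: compute a single DFT coefficient F(u, v) as a weighted sum of pixels
# and compare it with np.fft.fft2 (discrete analogue of the formula above)
rng = np.random.default_rng(0)
f2 = rng.random((8, 8))
u, v = 2, 3
y, x = np.mgrid[0:8, 0:8]
weights = np.exp(-2j * np.pi * (u * y + v * x) / 8)
print(np.allclose(np.sum(f2 * weights), np.fft.fft2(f2)[u, v]))  # expected: True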

The transformation from the intensity domain to the frequency domain is performed with the Fourier transform, which is implemented in both OpenCV and NumPy. An example using NumPy is presented below.

The fft2 function computes the Fourier transform of a 2-dimensional signal. Then, using the fftshift function, we rearrange the data so that the zero-frequency (lowest-frequency) components end up in the center and the high-frequency components at the edges of the frequency-domain image.
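
A one-dimensional sketch of what fftshift does, using np.fft.fftfreq to label the frequency bins: the zero-frequency bin moves to the middle of the array.

In [ ]:
# sketch: fftshift moves the zero-frequency bin to the center
print(np.fft.fftfreq(8))                   # [ 0.     0.125  0.25   0.375 -0.5   -0.375 -0.25  -0.125]
print(np.fft.fftshift(np.fft.fftfreq(8)))  # [-0.5   -0.375 -0.25  -0.125  0.     0.125  0.25   0.375]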

In [305]:
f = np.fft.fft2(img_grayscale)
fshift = np.fft.fftshift(f)
magnitude_spectrum = 20 * np.log(np.abs(fshift) + 1)  # +1 avoids log(0)

The inverse transform comes down to undoing the shift (ifftshift) and applying ifft2, which yields an image back in the intensity domain. It is necessary to take the real part with real(), because the result is complex-valued.

In [306]:
f_ishift = np.fft.ifftshift(fshift)
img_back = np.fft.ifft2(f_ishift)
img_back = np.real(img_back)

imshow(np.concatenate([img_grayscale, magnitude_spectrum, img_back], 1))
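
As a quick sanity check (a sketch using the arrays computed above), the round trip intensity → frequency → intensity reproduces the original image up to floating-point error:

In [ ]:
print(np.allclose(img_back, img_grayscale))  # expected: True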

Interpretation of the frequency domain¶

Having the inverse transform (frequency → intensity), we can run a few experiments to build intuition about how pixel values in the frequency domain relate to patterns in the intensity domain.

In [307]:
def get_mask(s, div):
    mask = np.zeros(s, np.float32)
    # cv2.circle expects the center as (x, y), i.e. (column, row)
    return cv2.circle(mask, (s[1] // 2, s[0] // 2), s[0] // div, 1, -1)


def fft(img, size=None):  # convert to the frequency domain; fftshift centers the low frequencies
    f = np.fft.fft2(img, size)
    fshift = np.fft.fftshift(f)
    spectrum = 20 * np.log(np.abs(fshift) + 1)  # +1 avoids log(0)
    return fshift, spectrum


def ifft(fshift):  # convert back to the intensity domain
    f_ishift = np.fft.ifftshift(fshift)
    img_back = np.fft.ifft2(f_ishift)
    return np.real(img_back)


def showFreqAndImages(frequencies, images):
    fig, axes = plt.subplots(2, len(frequencies), figsize=(20, 10))
    for i, (freq, img) in enumerate(zip(frequencies, images)):
        axes[0, i].imshow(freq, cmap="gray", vmin=0, vmax=255)
        axes[1, i].imshow(img, cmap="gray", vmin=0, vmax=255)
    plt.show()

Let's define a 256 × 256 image filled with zeros and set a high value at certain positions. The coordinates will be, respectively (where $S_h = height / 2 = 128$ and $S_w = width / 2 = 128$):

  • $(S_h, S_w)$
  • $(S_h, S_w - 1)$
  • $(S_h, S_w + 1)$
  • $(S_h, S_w - 1)$, $(S_h, S_w + 1)$
In [308]:
sw = 256 // 2
sh = 256 // 2

freq_1 = np.zeros([256, 256], np.float32)
freq_1[sh, sw] = 10000000.0

freq_2 = np.zeros([256, 256], np.float32)
freq_2[sh, sw - 1] = 10000000.0

freq_3 = np.zeros([256, 256], np.float32)
freq_3[sh, sw + 1] = 10000000.0

freq_4 = np.zeros([256, 256], np.float32)
freq_4[sh, sw - 1] = 10000000.0
freq_4[sh, sw + 1] = 10000000.0


img_i_1 = ifft(freq_1)
img_i_2 = ifft(freq_2)
img_i_3 = ifft(freq_3)
img_i_4 = ifft(freq_4)


showFreqAndImages(
    [freq_1, freq_2, freq_3, freq_4], [img_i_1, img_i_2, img_i_3, img_i_4]
)

Similarly, for changing pixel values vertically:

  • $(S_h, S_w)$
  • $(S_h - 1, S_w)$
  • $(S_h + 1, S_w)$
  • $(S_h - 1, S_w)$, $(S_h + 1, S_w)$
In [309]:
sw = 256 // 2
sh = 256 // 2

freq_1 = np.zeros([256, 256], np.float32)
freq_1[sh, sw] = 10000000.0
freq_2 = np.zeros([256, 256], np.float32)
freq_2[sh - 1, sw] = 10000000.0
freq_3 = np.zeros([256, 256], np.float32)
freq_3[sh + 1, sw] = 10000000.0
freq_4 = np.zeros([256, 256], np.float32)
freq_4[sh - 1, sw] = 10000000.0
freq_4[sh + 1, sw] = 10000000.0


img_i_1 = ifft(freq_1)
img_i_2 = ifft(freq_2)
img_i_3 = ifft(freq_3)
img_i_4 = ifft(freq_4)

showFreqAndImages(
    [freq_1, freq_2, freq_3, freq_4], [img_i_1, img_i_2, img_i_3, img_i_4]
)
In [310]:
# sanity check: a spike placed farther from the center should produce a higher-frequency pattern
freq_0 = np.zeros([256, 256], np.float32)
freq_0[sh, sw] = 10000000.0

freq_00 = np.zeros([256, 256], np.float32)
freq_00[sh + 50, sw] = 10000000.0

img_i_0 = ifft(freq_0)
img_i_00 = ifft(freq_00)

showFreqAndImages(
    [freq_0, freq_00], [img_i_0, img_i_00]
)

The above experiment leads to the following conclusions:

  • the middle pixel corresponds to a uniform image (frequency equal to zero),
  • pixels offset by 1 to the left and to the right give the same result, because they correspond to signals with opposite frequencies,
  • horizontal offsets in the frequency domain correspond to horizontal 2-D waves in the intensity-domain image; similarly for vertical offsets.

By setting high values farther from the center, the inverse Fourier transform produces intensity-domain images containing signals of higher frequency.
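
The relationship is quantitative: a spike offset by $k$ bins from the center produces a wave with a period of $N / k$ pixels. A small sketch using the helpers defined above (the offset $k = 4$ is an arbitrary choice):

In [ ]:
# sketch: a spike k bins right of the center yields a horizontal cosine
# with period N / k pixels
N, k = 256, 4
freq = np.zeros([N, N], np.float32)
freq[N // 2, N // 2 + k] = 1.0
row = ifft(freq)[0]                        # one row of the reconstruction
crests = np.where(row > 0.999 * row.max())[0]
print(np.diff(crests))                     # expected: [64 64 64], i.e. N / k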

In [311]:
sw = 256 // 2
sh = 256 // 2

freq_1 = np.zeros([256, 256], np.float32)
freq_1[sh - 1, sw] = 10000000.0
freq_1[sh + 1, sw] = 10000000.0
freq_1[sh, sw - 1] = 10000000.0
freq_1[sh, sw + 1] = 10000000.0
freq_2 = np.zeros([256, 256], np.float32)
freq_2[sh - 1, sw - 1] = 10000000.0
freq_2[sh + 1, sw + 1] = 10000000.0
freq_2[sh - 1, sw + 1] = 10000000.0
freq_2[sh + 1, sw - 1] = 10000000.0
freq_3 = np.zeros([256, 256], np.float32)
freq_3[sh, sw - 2] = 10000000.0
freq_4 = np.zeros([256, 256], np.float32)
freq_4[sh, sw - 8] = 10000000.0

img_i_1 = ifft(freq_1)
img_i_2 = ifft(freq_2)
img_i_3 = ifft(freq_3)
img_i_4 = ifft(freq_4)

imshow(np.concatenate([freq_1, freq_2, freq_3, freq_4], 1))
imshow(np.concatenate([img_i_1, img_i_2, img_i_3, img_i_4], 1))

Operations in the frequency domain¶

Image processing in the frequency domain is closely related to the concept of convolution in the intensity domain. According to the convolution theorem for the Fourier transform (link), we can write:

$$(f \bullet g)(t) = {\mathcal{F}}^{-1}\{F * G\}$$

where:

  • $f$, $F$ - input image in the intensity and frequency domains, respectively,
  • $g$, $G$ - kernel / filter (e.g. Laplacian) in the intensity and frequency domains, respectively,
  • ${\mathcal{F}^{-1}}$ - inverse Fourier transform
  • $\bullet$ - convolution
  • $*$ - multiplication

The above theorem says that convolving two functions in the time or space domain is equivalent to element-wise multiplication of their representations in the frequency domain.

Note: in the case of the Discrete Fourier Transform (DFT), multiplication in the frequency domain corresponds to circular convolution, where the signal wraps around at the edges instead of extending linearly. This is generally not the desired outcome when linear convolution is intended, as the wrap-around produces incorrect values near the borders. To avoid it, both signals are zero-padded before the FFT so that their size is at least $N + M - 1$, where $N$ is the size of the signal and $M$ is the size of the kernel.

Computational complexity of convolution

For two sequences of lengths $N$ and $M$, direct convolution multiplies each element of one sequence with every element of the other and sums the products for each shift. This gives a time complexity of $O(NM)$.

For FFT-based convolution with padding to $L = N + M - 1$, the complexity is $O(L \log L)$.
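
A one-dimensional sketch of both points at once: padding to $N + M - 1$ makes FFT-based (circular) convolution agree with direct linear convolution. The lengths 64 and 5 are arbitrary choices.

In [ ]:
# sketch: linear convolution via FFT with zero-padding to N + M - 1
x = np.random.rand(64)              # signal of length N
g = np.random.rand(5)               # kernel of length M
L = len(x) + len(g) - 1             # padding to this length avoids wrap-around
fft_conv = np.real(np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(g, L)))
print(np.allclose(fft_conv, np.convolve(x, g)))  # expected: True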

The earlier transformations between the domains showed that an image of a certain size (in the intensity domain) has the same size in the frequency domain. The size directly determines the number of representable frequencies; hence, to represent a smaller image with a larger number of frequencies, the image can be zero-padded (as in the case of convolution).

With the OpenCV implementation you need to do the padding manually, while with NumPy it is enough to pass the desired size as the second parameter of the fft2 function.

In [312]:
mean_filter = np.ones((3, 3))  # mean filter blurs the image by averaging neighbouring pixels

x = cv2.getGaussianKernel(5, 10)  # smoother blur
gaussian = x * x.T

laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])  # laplacian filter highlights edges by emphasizing areas of rapid intensity change


filters = [mean_filter, gaussian, laplacian]
fft_filters = [np.fft.fft2(x, (256, 256)) for x in filters]  # zero-pads each filter to 256x256 and applies the 2-D Fourier transform
fft_shift = [np.fft.fftshift(y) for y in fft_filters]  # moves the low frequencies to the center of each transformed filter
mag_spectrum = [np.log(np.abs(z) + 1) for z in fft_shift]

imshow(np.concatenate(mag_spectrum, 1) * 255)

Having the image and the filter in the same domain, defined over the same number of frequencies, we can, according to the theorem cited above, perform the equivalent of convolution by simple multiplication in the frequency domain (here the filter is zero-padded to the image size, so the result is a circular convolution).

In [313]:
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])  # laplacian filter -> highlights edges


f_lap_shift, f_lap_mag = fft(laplacian, (512, 512))
fshift, spectrum = fft(img_grayscale)
img_i = np.real(ifft(fshift * f_lap_shift))

imshow(np.concatenate([img_grayscale, np.real(spectrum * f_lap_shift), img_i], 1))

img_i = cv2.filter2D(img_grayscale, -1, laplacian)
imshow(np.concatenate([img_grayscale, img_i], 1))

Low Pass Filter¶

Frequency domain filtering can also be divided into low pass and high pass.

Low-pass filtering passes low-frequency signals through and cuts off signals above a certain threshold. As shown earlier, in the shifted spectrum the high-frequency signals lie near the edges of the image, and the closer to the center, the lower the frequency.

Low-pass filtering implemented as a hard circular cut-off is shown below.

In [314]:
m1 = get_mask(img_grayscale.shape, 2)
m2 = get_mask(img_grayscale.shape, 8)
m3 = get_mask(img_grayscale.shape, 16)

#fshift -> The frequency domain representation of the image
#spectrum -> magnitude spectrum, a visualization that represents the strength of each frequency component
fshift, spectrum = fft(img_grayscale)

img_back_1 = ifft(fshift * m1)
img_back_2 = ifft(fshift * m2)
img_back_3 = ifft(fshift * m3)

imshow(np.concatenate([img_grayscale, spectrum * m1, img_back_1], 1))
imshow(np.concatenate([img_grayscale, spectrum * m2, img_back_2], 1))
imshow(np.concatenate([img_grayscale, spectrum * m3, img_back_3], 1))

The result of low-pass filtering is usually a blur effect, because we discard the areas of high variability (edges and fine details, but also noise).
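
The hard cut-off also introduces ringing (ripples around edges). A softer mask avoids this; below is a minimal sketch with a Gaussian mask built from the image coordinates (the value sigma = 30 is an arbitrary choice):

In [ ]:
# sketch: a Gaussian low-pass mask tapers smoothly instead of cutting off hard
hh, ww = img_grayscale.shape
yy, xx = np.mgrid[0:hh, 0:ww]
sigma = 30.0  # assumed cut-off softness
gauss_mask = np.exp(-((yy - hh / 2) ** 2 + (xx - ww / 2) ** 2) / (2 * sigma**2))

fshift, spectrum = fft(img_grayscale)
imshow(np.concatenate([img_grayscale, spectrum * gauss_mask, ifft(fshift * gauss_mask)], 1))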

High Pass Filter¶

Filters from the high-pass family work in a similar way. By passing only high-frequency signals, we extract edges, fine details, and noise.

A high-pass filter implemented by cutting off signals below a certain threshold is presented below.

In [315]:
m1 = 1 - get_mask(img_grayscale.shape, 2)
m2 = 1 - get_mask(img_grayscale.shape, 16)
m3 = 1 - get_mask(img_grayscale.shape, 32)

fshift, spectrum = fft(img_grayscale)

img_back_1 = ifft(fshift * m1)
img_back_2 = ifft(fshift * m2)
img_back_3 = ifft(fshift * m3)

imshow(np.concatenate([img_grayscale, spectrum * m1, img_back_1], 1))
imshow(np.concatenate([img_grayscale, spectrum * m2, img_back_2], 1))
imshow(np.concatenate([img_grayscale, spectrum * m3, img_back_3], 1))
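
Since the high-pass mask is exactly the complement of a low-pass mask ($1 - m$), the two filtered images sum back to the original. A quick sketch using the helpers above:

In [ ]:
# sketch: a mask and its complement partition the spectrum, so the low-pass
# and high-pass reconstructions sum to the original image
m = get_mask(img_grayscale.shape, 8)
fshift, _ = fft(img_grayscale)
print(np.allclose(ifft(fshift * m) + ifft(fshift * (1 - m)), img_grayscale))  # expected: True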

Task 1¶

Check and argue which of the filters listed below is low pass and which is high pass.

Filters:

  • averaging,
  • gaussian,
  • sobel,
  • laplacian

Answers with reasoning are given at the bottom.

In [316]:
# averaging (mean), gaussian and laplacian filters taken from the previous cells
mean_filter = np.ones((3, 3)) / 9 #mean filter blurs img by averaging the neighbouring pixels

x = cv2.getGaussianKernel(5, 10) #more smooth blur
gaussian = x * x.T
sobel_x = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]])
sobel_y = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]])
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])

filters = [mean_filter, gaussian, sobel_x, sobel_y, laplacian]
fft_filters = [np.fft.fft2(x, (512, 512)) for x in filters]  # zero-pads each filter to 512x512 and applies the 2-D Fourier transform
fft_shift = [np.fft.fftshift(y) for y in fft_filters]  # moves the low frequencies to the center of each transformed filter
mag_spectrum = [np.log(np.abs(z) + 1) for z in fft_shift]
print("Magnitude spectrums for the filters:")
print("1) Averaging Filter - Low Pass")
print("2) Gaussian Filter - Low Pass")
print("3) Sobel X Filter - High Pass")
print("4) Sobel Y Filter - High Pass")
print("5) Laplasjan Filter - High Pass")
imshow(np.concatenate(mag_spectrum, 1) * 255)


fshift, spectrum = fft(img_grayscale)
filtered_images = [np.real(ifft(fshift * fft(flt, (512, 512))[0])) for flt in filters]
print("Lenna Image After Applying Low-Pass and High-Pass Filters in Frequency Domain:")
imshow(np.concatenate(filtered_images, 1))

#now just focusing on the sobel to get the magnitude spectrum for 'combined' sobel
sobel_filters = [sobel_x, sobel_y]

fft_filters_sobel = [np.fft.fft2(x, (512, 512)) for x in sobel_filters]
fft_shift_sobel = [np.fft.fftshift(y) for y in fft_filters_sobel]
combined_sobel_freq = np.sqrt(np.sum([np.abs(x)**2 for x in fft_shift_sobel], axis=0))
mag_spectrum_sobel = np.log(np.abs(combined_sobel_freq) + 1)

print("Magnitude spectrum for the combined sobel (sobel_x and sobel_y):")
imshow(mag_spectrum_sobel * 255)
Magnitude spectrums for the filters:
1) Averaging Filter - Low Pass
2) Gaussian Filter - Low Pass
3) Sobel X Filter - High Pass
4) Sobel Y Filter - High Pass
5) Laplacian Filter - High Pass
Lenna Image After Applying Low-Pass and High-Pass Filters in Frequency Domain:
Magnitude spectrum for the combined sobel (sobel_x and sobel_y):

Averaging Filter - *Low Pass*

The averaging filter is a low-pass filter: the center of its magnitude spectrum is white, which means the low-frequency components (those near the center) are passed through. We also know that the averaging filter blurs the image, and blurring is a low-pass operation.

Gaussian Filter - *Low Pass*

As with the averaging filter, the center of the Gaussian filter's magnitude spectrum is white, so the low-frequency components pass through, meaning it is a low-pass filter. It too is a blurring filter, which confirms the conclusion.

Sobel Filter - *High Pass*

The Sobel filter is a high-pass filter, since the center of its magnitude spectrum is dark. Depending on the variant (horizontal or vertical), a dark line runs horizontally or vertically, but in both cases it passes through the center. When the horizontal and vertical Sobel filters are combined, the center of the magnitude spectrum is still black, i.e. suppressed. This all means it is a high-pass filter: only the higher-frequency components pass. The Sobel filter is also used for edge detection, which relies on high-frequency content, so from that alone we can say it is high-pass.

Laplacian Filter - *High Pass*

The Laplacian filter, like the Sobel filter, has a dark center in its magnitude spectrum, so only the high-frequency components pass, meaning the filter is high-pass. The Laplacian is also used for edge detection (like Sobel), so from that alone we can again conclude it is a high-pass filter.


Task 2¶

Perform edge detection (vertically and horizontally, separately) using Sobel filters in the frequency domain, then combine the detected features into one image and present the results. Compare the frequency-domain result with the intensity-domain result of the Sobel filter. Note: the resulting images from both methods should be the same.

In [317]:
sobels = [sobel_x, sobel_y]

# reflect-pad the image so the circular convolution does not wrap real image content
pad_v = (sobels[0].shape[0] - 1) // 2
pad_h = (sobels[0].shape[1] - 1) // 2
extra_pad = 2
img_grayscale_padded = cv2.copyMakeBorder(img_grayscale, pad_v + extra_pad, pad_v + extra_pad, pad_h + extra_pad, pad_h + extra_pad, cv2.BORDER_REFLECT)


fshift, spectrum = fft(img_grayscale_padded)
sob_i = []
for sobel in sobels:
  f_sob_shift, f_sob_mag = fft(sobel, img_grayscale_padded.shape)
  img_i = np.real(ifft(fshift * f_sob_shift))
  # crop back to the original 512x512; the extra +/-1 aligns the circularly
  # convolved result with the kernel's center
  img_i_cropped = img_i[pad_v + extra_pad + 1 : pad_v - extra_pad - 1, pad_h + extra_pad + 1 : pad_h - extra_pad - 1]
  sob_i.append(img_i_cropped)

combined_edges_freq = np.sqrt(sob_i[0]**2 + sob_i[1]**2)
#frequency domain
print("Frequency domain:")
imshow(np.concatenate([img_grayscale, sob_i[0], sob_i[1], combined_edges_freq], 1))

# intensity domain: with cv2.filter2D the edges are a bit different, but the results match the cv2.Sobel function
img_i_1 = cv2.filter2D(img_grayscale.astype(np.float32),  -1, sobel_x)
img_i_2 = cv2.filter2D(img_grayscale.astype(np.float32),  -1, sobel_y)
combined_edges_intensity = np.sqrt(img_i_1**2 + img_i_2**2)
print("\n\nIntensity domain:")
imshow(np.concatenate([img_grayscale, img_i_1, img_i_2, combined_edges_intensity], 1))


same_pixels = np.sum(np.abs(combined_edges_freq - combined_edges_intensity) < 2)
different_pixels = np.sum(np.abs(combined_edges_freq - combined_edges_intensity) >= 2)

print(f"Number of same pixels between intensity and frequency domain in final picture: {same_pixels}")
print(f"Number of different pixels between intensity and frequency domain in final picture: {different_pixels}\n\n\n")


# for comparison: the same operation with OpenCV's built-in cv2.Sobel
img_i_1 = cv2.Sobel(img_grayscale, cv2.CV_64F, 1, 0, ksize=3)
img_i_2 = cv2.Sobel(img_grayscale, cv2.CV_64F, 0, 1, ksize=3)
combined_edges_intensity = np.sqrt(img_i_1**2 + img_i_2**2)
imshow(np.concatenate([img_grayscale, img_i_1, img_i_2, combined_edges_intensity], 1))

same_pixels = np.sum(np.abs(combined_edges_freq - combined_edges_intensity) < 2)
different_pixels = np.sum(np.abs(combined_edges_freq - combined_edges_intensity) >= 2)

print(f"Number of same pixels between intensity and frequency domain in final picture: {same_pixels}")
print(f"Number of different pixels between intensity and frequency domain in final picture: {different_pixels}\n\n\n")
Frequency domain:

Intensity domain:
Number of same pixels between intensity and frequency domain in final picture: 261761
Number of different pixels between intensity and frequency domain in final picture: 383



Number of same pixels between intensity and frequency domain in final picture: 261761
Number of different pixels between intensity and frequency domain in final picture: 383